k-Sparse Autoencoders

Authors

  • Alireza Makhzani
  • Brendan J. Frey
Abstract

Recently, it has been observed that when representations are learned in a way that encourages sparsity, improved performance is obtained on classification tasks. These methods involve combinations of activation functions, sampling steps, and different kinds of penalties. To investigate the effectiveness of sparsity by itself, we propose the “k-sparse autoencoder”, which is an autoencoder with a linear activation function, where in hidden layers only the k highest activities are kept. When applied to the MNIST and NORB datasets, we find that this method achieves better classification results than denoising autoencoders, networks trained with dropout, and RBMs. k-sparse autoencoders are simple to train and the encoding stage is very fast, making them well-suited to large problem sizes, where conventional sparse coding algorithms cannot be applied.
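The core operation is easy to state: run a linear encoder, then keep only the k largest hidden activities and zero out the rest, so reconstruction (and backpropagation) acts only on that support set. Below is a minimal NumPy sketch of this selection step, assuming tied decoder weights and toy sizes; the function names and dimensions are illustrative, not from the paper:

    import numpy as np

    def k_sparse_forward(x, W, b, k):
        """Encode x linearly, then keep only the k largest hidden activities per example."""
        z = x @ W + b                                      # linear encoder, no nonlinearity
        idx = np.argpartition(z, -k, axis=1)[:, -k:]       # indices of the k largest activities
        h = np.zeros_like(z)
        rows = np.arange(z.shape[0])[:, None]
        h[rows, idx] = z[rows, idx]                        # zero everything outside the support set
        return h

    def reconstruct(h, W, b_prime):
        """Decode with tied weights, a common autoencoder choice."""
        return h @ W.T + b_prime

    rng = np.random.default_rng(0)
    x = rng.normal(size=(8, 784))                          # a toy mini-batch of MNIST-sized inputs
    W = rng.normal(scale=0.01, size=(784, 1000))
    h = k_sparse_forward(x, W, np.zeros(1000), k=25)       # sparse hidden code
    x_hat = reconstruct(h, np.copy(W), np.zeros(784))      # reconstruction from the support set

Because the encoder is a single matrix multiply followed by a top-k selection, encoding stays cheap even for large hidden layers, which is the speed advantage the abstract points to.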


Similar articles

Sparse Autoencoders in Sentiment Analysis

This paper examines the use of sparse autoencoders in the task of sentiment analysis. Autoencoders can be used for pre-training a deep neural network, discovering new features, or for dimensionality reduction. In this paper, sparse autoencoders were used for parameter initialization in a deep neural network. Experiments showed that the accuracy of text classification to a particular s...
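As a rough illustration of this pre-training scheme, one can train a single-layer sparse autoencoder on unlabeled data and then reuse its encoder weights to initialize the first layer of the supervised network. A minimal NumPy sketch under those assumptions; the L1 activity penalty, tied weights, and all sizes are illustrative choices, not details from the paper:

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(256, 100))              # unlabeled inputs, e.g. document feature vectors
    W = rng.normal(scale=0.1, size=(100, 64))    # encoder weights, later reused by the classifier
    b = np.zeros(64)
    lam, lr = 1e-3, 1e-2

    for _ in range(200):                         # plain gradient descent on the (toy) batch
        Z = X @ W + b
        H = np.maximum(Z, 0.0)                   # ReLU hidden code
        R = H @ W.T - X                          # reconstruction residual (tied-weight decoder)
        G = (2 * R @ W + lam * np.sign(H)) * (Z > 0)    # grad w.r.t. Z: reconstruction + L1 sparsity
        W -= lr * (X.T @ G + 2 * R.T @ H) / len(X)      # encoder path + tied decoder path
        b -= lr * G.sum(0) / len(X)

    # W and b now initialize the first layer of the supervised sentiment classifier,
    # which is then fine-tuned on the labeled data.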


Image Compression: Sparse Coding vs. Bottleneck Autoencoders

Bottleneck autoencoders have been actively researched as a solution to image compression tasks. However, we observed that bottleneck autoencoders produce subjectively low quality reconstructed images. In this work, we explore the ability of sparse coding to improve reconstructed image quality for the same degree of compression. We observe that sparse image compression produces visually superior...
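For concreteness, the sparse-coding side of this comparison amounts to inferring a sparse code for each image patch against a fixed dictionary. Below is a minimal sketch using ISTA (iterative soft-thresholding); the dictionary, step size, and penalty weight are illustrative placeholders, not values from the paper:

    import numpy as np

    def ista(x, D, lam=0.1, lr=0.05, steps=200):
        """Solve min_a ||x - D a||^2 + lam * ||a||_1 by iterative soft-thresholding."""
        a = np.zeros(D.shape[1])
        for _ in range(steps):
            grad = D.T @ (D @ a - x)                              # gradient of the reconstruction term
            a = a - lr * grad
            a = np.sign(a) * np.maximum(np.abs(a) - lr * lam, 0)  # soft-threshold toward sparsity
        return a

    rng = np.random.default_rng(0)
    D = rng.normal(size=(64, 256))
    D /= np.linalg.norm(D, axis=0)        # unit-norm dictionary atoms
    x = rng.normal(size=64)               # one image patch, flattened
    a = ista(x, D)                        # sparse code: the compressed representation
    x_hat = D @ a                         # reconstruction to compare against a bottleneck AE

A bottleneck autoencoder instead compresses by forcing all information through a low-dimensional dense code; the sparse code above keeps dimensionality high but stores only the few nonzero coefficients.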


Switched linear encoding with rectified linear autoencoders

Several recent results in machine learning have established formal connections between autoencoders—artificial neural network models that attempt to reproduce their inputs—and other coding models like sparse coding and K-means. This paper explores in depth an autoencoder model that is constructed using rectified linear activations on its hidden units. Our analysis builds on recent results to fu...
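The "switched linear" view can be checked directly: once the set of active ReLU units is fixed for a given input, the encoder is an ordinary linear map on the whole region where that activation pattern holds. A small NumPy sketch of this observation, with arbitrary toy weights and sizes:

    import numpy as np

    rng = np.random.default_rng(0)
    W, b = rng.normal(size=(10, 20)), rng.normal(size=20)

    def relu_encode(x):
        return np.maximum(x @ W + b, 0.0)

    x = rng.normal(size=10)
    h = relu_encode(x)
    switch = h > 0                        # the "switch": which hidden units are active
    # On the region where this switch pattern holds, the encoder is exactly linear:
    h_linear = (x @ W + b) * switch
    assert np.allclose(h, h_linear)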


Training Autoencoders in Sparse Domain

Autoencoders (AE) are essential in learning representations of large data (such as images) for dimensionality reduction. Images are converted to a sparse domain using transforms like the Fast Fourier Transform (FFT) or the Discrete Cosine Transform (DCT), where the information that requires encoding is minimal. By optimally selecting the feature-rich frequencies, we are able to learn the latent vectors more robus...
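One plausible reading of this pipeline, sketched below with NumPy/SciPy: take the 2-D DCT of each image, keep only the largest-magnitude (feature-rich) coefficients, and feed that sparse coefficient vector to the autoencoder. The per-image top-k selection and all sizes here are illustrative assumptions, not the paper's exact selection rule:

    import numpy as np
    from scipy.fft import dctn, idctn

    rng = np.random.default_rng(0)
    img = rng.random((28, 28))                        # a toy MNIST-sized image

    coeffs = dctn(img, norm='ortho')                  # 2-D DCT: energy concentrates in few coefficients
    flat = coeffs.ravel()
    keep = 64                                         # number of feature-rich frequencies to keep
    idx = np.argsort(np.abs(flat))[-keep:]            # largest-magnitude coefficients
    sparse = np.zeros_like(flat)
    sparse[idx] = flat[idx]                           # this sparse vector is the autoencoder's input

    img_back = idctn(sparse.reshape(28, 28), norm='ortho')   # rough reconstruction check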


Winner-Take-All Autoencoders

In this paper, we propose a winner-take-all method for learning hierarchical sparse representations in an unsupervised fashion. We first introduce fully-connected winner-take-all autoencoders which use mini-batch statistics to directly enforce a lifetime sparsity in the activations of the hidden units. We then propose the convolutional winner-take-all autoencoder which combines the benefits of ...
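Lifetime sparsity is the batch-wise counterpart of the k-sparse rule above: instead of keeping the top activities within each example, each hidden unit keeps only its k largest activations across the mini-batch, so every unit wins on a few examples and is silenced elsewhere. A minimal NumPy sketch of that masking step, with illustrative names and sizes:

    import numpy as np

    def lifetime_sparsity(H, k):
        """Keep, for each hidden unit (column), only its k largest activations in the batch."""
        idx = np.argpartition(H, -k, axis=0)[-k:, :]   # top-k rows per column
        mask = np.zeros_like(H, dtype=bool)
        mask[idx, np.arange(H.shape[1])] = True
        return H * mask

    rng = np.random.default_rng(0)
    H = np.maximum(rng.normal(size=(128, 256)), 0.0)   # ReLU hidden activities for a mini-batch
    H_sparse = lifetime_sparsity(H, k=5)               # each unit fires on at most 5 examples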



Journal:
  • CoRR

Volume: abs/1312.5663

Publication date: 2013